Introduction

[No abstract is needed.] Each replication project will have a straightforward, no frills report of the study and results. These reports will be publicly available as supplementary material for the aggregate report(s) of the project as a whole. Also, to maximize project integrity, the intro and methods will be written and critiqued in advance of data collection. Introductions can be just 1-2 paragraphs clarifying the main idea of the original study, the target finding for replication, and any other essential information. It will NOT have a literature review – that is in the original publication. You can write both the introduction and the methods in past tense.

In Study 2 of the report “Differentiate to Regulate: Low Negative Emotion Differentiation Is Associated With Ineffective Use but Not Selection of Emotion-Regulation Strategies”, Kalokerinos and colleagues (2019) examined how the ability to make fine-grained distinctions between emotional states (i.e., emotion differentiation) was related to the selection and efficacy of emotion-regulation (ER) strategies amid a high-intensity emotional event.

The authors initiated students on a nine-day experience sampling method (ESM) regimen two days prior to the students receiving a highly consequential test result. The student participants completed ten surveys per day for the nine days, delivered via an application on their smartphones or via a special mobile phone provided by the experimenters. Since the present study concerns regulation after an emotional event, only data collected after the results were released are analyzed, resulting in 7 days of ESM data.

The data collected in each survey were:

  • ratings of six discrete negative emotions (sad, angry, disappointed, ashamed, anxious, stressed) on a scale of 1-100 in response to thinking about the participant’s grade on the exam

  • ratings of the use of six ER strategies (rumination, distraction, reappraisal, expressive suppression, social sharing, acceptance) on a 7-point scale

  • the students’ test scores (dichotomized in the analysis into passing and failing groups).

I chose to reproduce this result for two reasons. The first and primary reason is that I am collecting EMA data where I plan to do a similar type of lagged panel model. Reproducing this analysis will provide helpful practice in this type of modeling. The second reason is that I am interested in relationships between emotion differentiation abilities and emotion regulation processes. Thus, the results of this paper are of personal importance to me.

The Github repository containing all the materials for this reproduction can be found here. The original research report PDF can be found here.

Methods

Power Analysis

Original effect size, power analysis for samples to achieve 80%, 90%, 95% power to detect that effect size. Considerations of feasibility for selecting planned sample size.

Sample

Planned sample size and/or termination rule, sampling frame, known demographics if any, preselection rules if any.

The sample for this dataset consisted of first-year Belgian undergraduate students. The planned N was 100 participants, which would have 80% power to detect a medium effect (r = .30, α = .05). The final N for the study was 101 participants (14 males; age: M = 18.64, SD = 1.45). Participants were recruited through a university research participation program and through social media. The authors had a predetermined rule for omitting participants with less than 50% completion, but no participants met this criterion, so all were included in the analysis. Participants were compensated commensurately with their study completion percentage.

Materials

All materials - can quote directly from original article - just put the text in quotations and note that this was followed precisely. Or, quote directly and just point out exceptions to what was described in the original article.

The materials for the study are taken directly from the original paper and copied below:

Negative emotion.

Six emotions (sad, angry, disappointed, ashamed, anxious, stressed) were assessed on a 100-point scale (1 = not at all, 100 = very much). The item stem was “When you think about your grades right now, how [emotion] are you feeling?” (RKF = .99, RC = .74). In this study, we updated this measure to include emotions relevant to the context of receiving learning outcomes (Pekrun, 2006). We kept “sad,” “angry,” “anxious,” and “stressed” from Study 1, as the former three are also learning-related emotions (Pekrun, 2006), and continuity across studies allowed for comparison. However, differentiation should replicate across the inclusion of different emotions if each of the emotions provides new information. We added “disappointed” and “ashamed” because of their centrality in retrospectively evaluating learning outcomes (Pekrun, 2006).

Negative-emotion differentiation.

As in Study 1, we took the ICC between negative emotions within-person across measurement occasions, applied a Fisher’s z transformation, and then reverse scored it so higher numbers equaled higher differentiation. There were no negative ICCs.
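As a concrete illustration of the transformation described in the quoted passage, a minimal base-R sketch (the ICC computation itself appears later in the Results section):

```r
# Sketch of the differentiation score as described above: a within-person
# ICC across the six negative emotions is Fisher-z transformed and reverse
# scored so that higher values mean higher differentiation.
differentiation_score <- function(icc) {
  stopifnot(icc > -1, icc < 1)  # Fisher z is undefined at |icc| = 1
  -1 * atanh(icc)               # atanh() is the Fisher z transformation
}

differentiation_score(0.9)  # highly consistent ratings -> low differentiation
differentiation_score(0.1)  # more distinct ratings -> higher differentiation
```

Since there were no negative ICCs in these data, all differentiation scores fall at or below zero, with values nearer zero indicating higher differentiation.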

Emotion-regulation strategies.

We assessed six strategies on a 7-point scale (0 = not at all, 6 = very much). The item stem was “Since the last beep, have you . . .” Five strategies from Study 1 were reworded to assess grade-relevant regulation: rumination (“ruminated about your grades?”), distraction (“distracted yourself from your grades and the associated emotions?”), reappraisal (“looked at your grades or the emotions that go with them from another perspective?”), expressive suppression (“suppressed the outward expression of your emotions about your grades?”), and social sharing (“talked to others about your grades and the associated emotions?”). We also included acceptance (“accepted your emotions about your grades the way they are?”).

Percentage passed.

For each subject, participants reported scores out of 20, with 10 and above being a passing grade and below 10 a failing grade. Failing requires retaking the exam later in the year or, in the case of too many failures, termination of enrollment. Given the clear emotional line at passing, we dichotomized scores on each subject as fail (1–9) or pass (10–20) and calculated the percentage of subjects passed across all subjects taken. This percentage variable was highly correlated with mean score out of 20 across exams (r = .90), and we found no differences in reported results when using mean score instead of percentage passed. In the baseline survey, we assessed participants’ expectations about their upcoming exam grades using the same measure. We used this to compute an expected-percentage-passed variable. Including both expected and actual passing percentage, or the difference between actual and expected passing percentage, did not substantively change our results. Thus, we focus on actual passing percentage.
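The dichotomization described in the quoted passage is simple enough to sketch directly (the scores here are hypothetical):

```r
# Hypothetical scores out of 20 for one participant's subjects.
scores <- c(8, 12, 15, 9, 18)

passed    <- scores >= 10        # pass = 10 or above, fail = below 10
perc_pass <- mean(passed) * 100  # percentage of subjects passed

perc_pass  # 60
```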

Procedure

Can quote directly from original article - just put the text in quotations and note that this was followed precisely. Or, quote directly and just point out exceptions to what was described in the original article.

The procedure for data collection is copied from the original article below:

Three days before receiving results, participants came to a lab session where they were trained on the ESM protocol. Participants were told that the study was about emotions and exams but were not given details about specific hypotheses. They then completed the ESM phase. On results-release day, within a 2-hr period, students were notified by e-mail that results were available in an online portal and asked to check them immediately. On this day, participants were sent a link to an online survey asking them to report their grade for each subject. For the ESM protocol, participants with a compatible personal Android phone installed mobileQ (N = 28). Other participants were given a research-only smartphone (N = 73). Participants completed 9 consecutive days of experience sampling: 2 days before the results release and 7 days after. We used a stratified random-interval scheme that sent a random signal within 10 equal intervals between 10:00 a.m. and 10:00 p.m. There was some variability in when results were released: Participants received their results between surveys 21 and 28 of 90. We were interested in regulation in response to results, and thus we included only post-results surveys, meaning that participants received between 63 and 70 surveys (M = 68.69). Participants received a signal on average every 71.9 min (SD = 29.8) and completed an average of 90.5% of signals (SD = 7.8%).
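The stratified random-interval scheme described above (one random signal within each of 10 equal intervals between 10:00 a.m. and 10:00 p.m., i.e., intervals of 72 min) can be sketched as follows; this is a toy illustration, not the authors' mobileQ implementation:

```r
# Toy sketch of a stratified random-interval signal scheme: draw one
# random signal time within each of 10 equal intervals spanning
# 10:00-22:00 (each interval is 1.2 hours = 72 minutes).
set.seed(251)
interval_starts <- 10 + 1.2 * (0:9)  # interval start times, in hours
signal_times    <- interval_starts + runif(10, min = 0, max = 1.2)
round(signal_times, 2)
```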

Analysis Plan

Can also quote directly, though it is less often spelled out effectively for an analysis strategy section. The key is to report an analysis strategy that is as close to the original - data cleaning rules, data exclusion rules, covariates, etc. - as possible.

Clarify the key analysis of interest here. You can also pre-specify additional analyses you plan to do.

The data analysis conducted in the original study consisted of two model structures built separately with each of the six emotion regulation strategies, resulting in twelve total models. The description of the data analytic strategy from the original article is copied below:

Data-analytic strategy

As in Study 1, we used lme4 (Bates et al., 2015) to fit mixed-effects models and standardized variables for analyses. We ran two-level models, with measurement occasions (N = 6,282) nested within persons (N = 101). In these models, we included percentage pass as a proxy for the emotional intensity of the stimulus. However, because we did not have the necessary statistical power, we did not estimate a three-way interaction with this variable. Strategies and negative emotion were measured at the occasion level, and differentiation and percentage passed at the person level. We found no substantive differences in either model when person-level negative emotion was included, but we included this variable in Model 1 to replicate Study 1.

Model 1: emotion differentiation as a predictor of emotion-regulation strategies.

In Model 1, we used differentiation, percentage passed, and negative emotion, which were grand-mean centered, to predict each strategy separately (six models). We included random intercepts per participant.

Model 2: Emotion Differentiation × Emotion Regulation Strategies predicting negative emotion.

In Model 2, we used differentiation, regulation, their cross-level interaction, and percentage passed to predict negative emotion (separately for each strategy; six models). We included lagged negative emotion (at the previous time point) to model emotional change, again excluding overnight lags. We person-mean-centered regulation and lagged emotion and grand-mean-centered differentiation and percentage passed. We included random intercepts per participant. For each participant, we included random slopes for regulation and lagged emotion, and we allowed these slopes to covary. There was one exception to this strategy: The acceptance model would not converge until we removed the random slope for acceptance, so we report this model with this random slope omitted.

Differences from Original Study

Explicitly describe known differences in sample, setting, procedure, and analysis plan from original study. The goal, of course, is to minimize those differences, but differences will inevitably occur. Also, note whether such differences are anticipated to make a difference based on claims in the original article or subsequent published research on the conditions for obtaining the effect.

Since this is an analysis reproduction, the data will be exactly the same. I plan to run this analysis using lme4 just as the authors did. I will also conduct the same analysis using the brms Bayesian regression package in R so that I can compare the credible intervals and expected values of the posterior distributions with the confidence intervals and point estimates given by the lme4 linear mixed-model framework.

Methods Addendum (Post Data Collection)

You can comment this section out prior to data collection and complete it for the final report.

Actual Sample

Sample size, demographics, data exclusions based on rules spelled out in analysis plan

Differences from pre-data collection methods plan

Any differences from what was described as the original plan, or “none”.

Results

Data preparation

Data preparation following the analysis plan.

Load Relevant Libraries and Functions

# rm(list = ls())
library(tidyverse)
## ── Attaching packages ────────────────────────────────────────── tidyverse 1.2.1 ──
## ✔ ggplot2 3.2.1     ✔ purrr   0.3.3
## ✔ tibble  2.1.3     ✔ dplyr   0.8.3
## ✔ tidyr   1.0.0     ✔ stringr 1.4.0
## ✔ readr   1.3.1     ✔ forcats 0.4.0
## ── Conflicts ───────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
library(haven)
library(psych)
## 
## Attaching package: 'psych'
## The following objects are masked from 'package:ggplot2':
## 
##     %+%, alpha
library(here)
## here() starts at /Users/ashish/files/phd_coursework/fall_2019/psy_251_methods/replication/kalokerinos2019
library(broom.mixed)
library(glue)
## 
## Attaching package: 'glue'
## The following object is masked from 'package:dplyr':
## 
##     collapse
library(lmerTest)
## Loading required package: lme4
## Loading required package: Matrix
## 
## Attaching package: 'Matrix'
## The following objects are masked from 'package:tidyr':
## 
##     expand, pack, unpack
## 
## Attaching package: 'lmerTest'
## The following object is masked from 'package:lme4':
## 
##     lmer
## The following object is masked from 'package:stats':
## 
##     step
library(riclpmr)
library(lavaan)
## This is lavaan 0.6-5
## lavaan is BETA software! Please report any bugs.
## 
## Attaching package: 'lavaan'
## The following object is masked from 'package:psych':
## 
##     cor2cov
theme_set(theme_minimal())

Import data

df_raw <- read_sav(here("data/Study 2_exam.sav"))

Data exclusion / filtering

df_raw_complete <- df_raw #%>%
  # filter(!is.na(ER_reapp))

Prepare data for analysis - create columns etc.

Column names of each emotion valence category
neg_emo_cols <- c("emotion_sad", "emotion_angry", "emotion_disapp",
                  "emotion_ashamed", "emotion_anxious", "emotion_stressed")
pos_emo_cols <- c("emotion_proud", "emotion_happy", "emotion_content", "emotion_relief")
er_strategy_cols <- c("ER_acceptance", "ER_rumination", "ER_reapp", 
                      "ER_supp", "ER_soc_sharing", "ER_distraction")
Data munging

Here I am creating useful columns. Most of these are either grand-mean scaled (varname_sc) or centered (varname_c), person-centered (varname_pc) or person-scaled (varname_psc), or lagged (varname_lag).
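Before the full pipeline, a toy base-R illustration of what these suffixes denote (the variable names here are illustrative only, not columns in the dataset):

```r
# Toy illustration of the column-suffix conventions used below.
x <- c(10, 20, 30, 40)
g <- c(1, 1, 2, 2)  # person id

x_c   <- x - mean(x)                     # _c:   grand-mean centered
x_sc  <- (x - mean(x)) / sd(x)           # _sc:  grand-mean centered and scaled
x_pc  <- ave(x, g, FUN = function(v) v - mean(v))            # _pc:  person-centered
x_psc <- ave(x, g, FUN = function(v) (v - mean(v)) / sd(v))  # _psc: person-scaled
x_lag <- c(NA, head(x, -1))              # _lag: previous observation in sequence
```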

df_raw_post_exam <- df_raw_complete %>%
  # ---- Keep only reports after exam results were received ---- #
  filter(exam_beepnum >= 0)

df_clean <- df_raw_post_exam  %>% 
  # ---- Create pos/neg emotion intensity composite scores ---- #
  mutate(
    neg_emo_comp = rowMeans(df_raw_post_exam[,neg_emo_cols], na.rm = T),
    pos_emo_comp = rowMeans(df_raw_post_exam[,pos_emo_cols], na.rm = T),
    perc_pass = perc_pass*100
  ) %>% 
  # ---- Create person level mean negative emotion score (covariate in models) ---- # 
  group_by(Participant) %>% mutate(person_neg_emo_mean = mean(neg_emo_comp, na.rm = T)) %>% ungroup %>% 
  # ---- Create lagged emotion and strategy vars ---- #
  group_by(Participant, beepday) %>% 
  mutate_at(vars(matches("emo|^ER_")), list(lag = ~  lag(.))) %>% 
  ungroup %>% 
  # ---- Person scale emotion and strategy vars ---- #
  group_by(Participant) %>% 
  mutate_at(vars(matches("emo|^ER_"), -matches("_sc$|_c$|_psc$|_pc$")), list(pc = ~scale(., T, F))) %>% 
  mutate_at(vars(matches("emo|^ER_"), -matches("_sc$|_c$|_psc$|_pc$")), list(psc = ~scale(.))) %>% 
  ungroup %>% 
  # ---- Create grand mean centered emotion and strategy vars ---- #
  mutate_at(vars(matches("emo|^ER_|^perc_pass$"), 
                 -matches("_sc$|_c$")), 
            list(c = ~scale(., center=T, scale=F))) %>%
  # ---- Create grand mean scaled emotion and strategy vars ---- #
  mutate_at(vars(matches("emo|^ER_|^perc_pass$"), 
                 -matches("_sc$|_c$")), 
            list(sc = ~scale(.))) #%>%
  
  
  # mutate_at(vars(matches("emo|^ER_")), ~ifelse(all(is.na(.)), 0, .))
Function to calculate ICC stats in row form
compute_icc <- function(dat) {
  dat %>%
    select(all_of(neg_emo_cols)) %>% 
    irr::icc(model="twoway", unit = "average") %>%       
    unlist %>%        
    t %>%              
    as_tibble %>%      
    select(              
      stimuli = subjects,
      raters, 
      icc = value,       
      lbound, 
      ubound
    ) %>%
    mutate_at(vars(stimuli, raters), as.integer) %>% 
    mutate_at(vars(icc:ubound), as.numeric)
}
Create dataframe of ICCs for each participant and Fisher-z transform
icc_df <- df_clean %>%
  group_by(Participant) %>%              
  nest() %>%                             
  mutate(icc = map(data, compute_icc)) %>%    
  unnest(icc) %>%                        
  select(-data) %>% 
  ungroup %>% 
  mutate(
    icc_fz = fisherz(icc),
    ed_score = icc_fz*-1,
    ed_score_class = case_when(
      ed_score > (mean(ed_score, na.rm = T) + sd(ed_score)) ~ "> +1 SD",
      ed_score < (mean(ed_score, na.rm = T) - sd(ed_score)) ~ "< -1 SD"
      ) 
    )
Merge ICC dataframe to full data
df <- icc_df %>% 
  select(Participant, icc, ed_score, ed_score_class) %>% 
  right_join(df_clean, by="Participant") %>% 
  mutate(ed_score_sc = scale(ed_score))
df <- df %>%
  filter(!is.na(ER_reapp))

Initial data viz

Visualizing participant compliance
df %>% 
  group_by(Participant) %>% 
  summarize(proportion_complete = sum(!is.na(ExecutionTime))/90) %>% 
  ungroup %>% 
  ggplot(aes(x = proportion_complete)) + 
  geom_histogram(binwidth = .02) +
  # geom_density() + 
  labs(x = "Proportion complete", y = "Number of participants")

Visualizing emotion characteristics

Create a long dataframe for visualizing emotion characteristics
df_emotion_long <- df %>% 
  group_by(Participant, perc_pass) %>% 
  summarize_at(vars(starts_with("emotion_"), 
                    -matches("_c$|_sc$|_psc$|_lag|_pc$")), 
               mean, na.rm=T) %>% 
  pivot_longer(cols = c(-Participant, -perc_pass), 
               names_to = "emotion_type", values_to = "emotion_rating") %>% 
  mutate(emotion_type = str_replace_all(emotion_type, "emotion_", ""),
         emotion_type = reorder(emotion_type, emotion_rating, mean)) %>% 
  left_join(select(icc_df, Participant, ed_score)) %>% 
  mutate(ed_score_class = case_when(
    ed_score > (mean(.$ed_score, na.rm = T) + sd(.$ed_score)) ~ "> +1 SD",
    ed_score < (mean(.$ed_score, na.rm = T) - sd(.$ed_score)) ~ "< -1 SD",
    TRUE ~ "within ±1 SD"
  ))
## Joining, by = "Participant"
To what degree did people experience different emotions?
df %>% 
  group_by(Participant) %>% 
  summarize_at(vars(starts_with("emotion_"), 
                    -matches("_c$|_sc$|_psc$|_lag|_pc")), 
               mean, na.rm=T) %>% 
  pivot_longer(cols = -Participant, names_to = "emotion_type", values_to = "emotion_rating") %>% 
  mutate(emotion_type = str_replace_all(emotion_type, "emotion_", ""),
         emotion_type = reorder(emotion_type, emotion_rating, mean)) %>% 
  
  ggplot(aes(x = emotion_type, y = emotion_rating)) +
  geom_bar(stat="summary") +
  theme(axis.text.x = element_text(angle=90)) +
  labs(x = "Emotion type", y = "Mean emotion intensity")
## No summary function supplied, defaulting to `mean_se()`

How did different emotions unfold over time?
df %>% 
  filter(exam_beepnum >= 0) %>% 
  group_by(Participant) %>% 
  ggplot(aes(x = exam_beepnum, y = neg_emo_comp)) + 
  geom_jitter(alpha = .2, size = .4) +
  geom_smooth() + 
  geom_smooth(method="lm", linetype="dotted") +
  labs(x = "EMA beep number", y = "Negative emotion mean")
## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")'

perc_pass_summary <- df %>% 
  group_by(Participant) %>% 
  summarize(perc_pass = mean(perc_pass, na.rm = T)) %>% 
  summarize(mean = mean(perc_pass, na.rm = T),
            sd = sd(perc_pass, na.rm = T))

df %>% 
  # # ---- create grouping by +/- 1 SD
  # mutate(
  #   perc_pass_group = case_when(
  #     perc_pass > (perc_pass_summary$mean + perc_pass_summary$sd) ~ "high",
  #     perc_pass < (perc_pass_summary$mean - perc_pass_summary$sd) ~ "low",
  #     perc_pass < (perc_pass_summary$mean + perc_pass_summary$sd) &
  #       perc_pass > (perc_pass_summary$mean - perc_pass_summary$sd) ~"mid"
  #   )
  # ) %>% 
  # filter(perc_pass_group == "mid") %>%
  group_by(Participant, exam_beepnum) %>% 
  summarize_at(vars(starts_with("emotion_"), 
                    -matches("_sc$|_c$|_psc$|_pc$|_lag$")), 
               mean, na.rm=T) %>% 
  pivot_longer(cols = c(-Participant, -exam_beepnum), 
               names_to = "emotion_type", 
               values_to = "emotion_rating") %>% 
  mutate(emotion_type = str_replace_all(emotion_type, "emotion_", ""),
         emotion_type = reorder(emotion_type, emotion_rating, mean))  %>% 
  
  ggplot(aes(x = exam_beepnum, y = emotion_rating, color = emotion_type)) +
  geom_jitter(alpha = .2, size = .1) +
  geom_smooth(se=F) +
  labs(x = "Ping number relative to exam result", y = "Emotion intensity", color = "")
## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")'

Emotion over time broken down by emotion type
df %>% 
  group_by(Participant, exam_beepnum) %>% 
  summarize_at(vars(starts_with("emotion_"), -matches("_sc$|_c$|_psc$|_pc$|_lag")), mean, na.rm=T) %>% 
  pivot_longer(cols = c(-Participant, -exam_beepnum), names_to = "emotion_type", values_to = "emotion_rating") %>% 
  mutate(emotion_type = str_replace_all(emotion_type, "emotion_", ""),
         emotion_type = reorder(emotion_type, emotion_rating, mean))  %>% 
  
  ggplot(aes(x = exam_beepnum, y = emotion_rating, color = emotion_type)) +
  facet_grid(emotion_type ~.) +
  geom_jitter(alpha = .1, size = .2) +
  geom_smooth() +
  theme(axis.text.x = element_text(angle=90)) +
  labs(x = "Ping number relative to exam result", y = "Emotion intensity")
## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")'

What is the relationship between percentage of exams passed and negative emotion?

Note that the variance around negative emotion increases starkly as people performed worse on exams. This could pose a problem for linear modeling.

df %>% 
  group_by(Participant, perc_pass) %>% 
  summarize_at(vars(starts_with("emotion_"), -matches("_sc$|_c$|_psc$|_pc$|_lag")), mean, na.rm=T) %>% 
  pivot_longer(cols = c(-Participant, -perc_pass), 
               names_to = "emotion_type", values_to = "emotion_rating") %>% 
  filter(emotion_type %in% neg_emo_cols) %>% 
  mutate(emotion_type = str_replace_all(emotion_type, "emotion_", ""),
         emotion_type = reorder(emotion_type, emotion_rating, mean))  %>% 
  summarize(mean_neg_emo = mean(emotion_rating),
            perc_pass = perc_pass[1]) %>% 
  
  ggplot(aes(x = perc_pass, y = mean_neg_emo)) +
  geom_jitter(alpha = .2) +
  geom_smooth() +
  labs(x = "Percentage of exams passed", y = "Mean negative emotion")
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'

Visualizing regulation characteristics

To what degree did people use different strategies?

Acceptance accounts for an outsized share of strategy usage. It is difficult to measure acceptance in this context, since the item can easily be misinterpreted as accepting one's exam scores rather than accepting one's negative emotions.

df %>% 
  group_by(Participant) %>% 
  summarize_at(vars(starts_with("ER_"),
                    -matches("_sc$|_c$|_psc$|_pc$|_lag$")),
               mean, na.rm=T) %>% 
  pivot_longer(cols = -Participant, 
               names_to = "strategy_type", values_to = "strategy_rating") %>% 
  mutate(strategy_type = str_replace_all(strategy_type, "ER_", ""),
         strategy_type = reorder(strategy_type, strategy_rating, mean)) %>% 
  
  ggplot(aes(x = strategy_type, y = strategy_rating)) +
  geom_bar(stat="summary") +
  stat_summary(geom="errorbar", fun.data = "mean_cl_boot", width = .1) +
  theme(axis.text.x = element_text(angle=90)) +
  scale_y_continuous(limits = c(0,6)) +
  labs(x = "Regulation strategy", 
       y = "Mean strategy use")
## No summary function supplied, defaulting to `mean_se()`

How does emotion intensity at time t-1 predict ER strategy at time t?
df %>% 
  pivot_longer(cols = er_strategy_cols, 
               names_to = "strategy_type", 
               values_to = "strategy_rating") %>% 
  
  ggplot(aes(x = neg_emo_comp_lag, y = strategy_rating, color = strategy_type)) +
  # facet_grid(strategy_type~.) +
  geom_jitter(alpha = .05, size = .5) +
  geom_smooth(method = "lm") +
  geom_smooth(se=F, linetype = "dashed", size = .3) +
  labs(x = "Emotion intensity at t-1", y = "Use of ER strategy")
## Warning: Removed 6402 rows containing non-finite values (stat_smooth).
## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")'
## Warning: Removed 6402 rows containing non-finite values (stat_smooth).
## Warning: Removed 6402 rows containing missing values (geom_point).

How does each strategy predict negative emotion, with emotion differentiation as a moderator?

(PLOT FROM PAPER)

plot_df <- df %>%
  select(Participant,
         ed_score,
         ed_score_class,
         matches("ER_.*_pc_sc$"),
         -matches("_c$|_psc$|_lag|_pc$"),
         neg_emo_comp) %>%
  pivot_longer(cols = c(-Participant, -neg_emo_comp, -ed_score, -ed_score_class),
               names_to = "strategy_type",
               values_to = "strategy_rating") %>%
  mutate(
    ed_score = scale(ed_score, T, F),
    strategy_type = str_replace_all(strategy_type, "ER_", ""),
    strategy_type = reorder(strategy_type, strategy_rating, mean)) %>% 
  mutate(strategy_type=recode(strategy_type,
    "rumination_pc_sc" = "Rumination",
    "reapp_pc_sc" = "Reappraisal",
    "soc_sharing_pc_sc" = "Social sharing",
    "acceptance_pc_sc" = "Acceptance",
    "distraction_pc_sc" = "Distraction",
    "supp_pc_sc" = "Suppression"
  ))


plot_df %>%
  ggplot(aes(x = strategy_rating, y = neg_emo_comp, color = ed_score)) +
  geom_jitter(alpha = .1, size = .2) +
  geom_smooth(data=plot_df[!is.na(plot_df$ed_score_class),],
              aes(linetype = ed_score_class),
              method = "lm",
              se=F, color="black", size = .3) +
  facet_wrap(~strategy_type) +
  scale_x_continuous(breaks = seq(floor(min(plot_df$strategy_rating)), ceiling(max(plot_df$strategy_rating)), 2)) +
  scale_colour_gradient2(low = "red", mid = "lightcyan", high = "blue") +
  labs(x = "Strategy usage",
       y = "Negative emotion intensity", 
       color = "Emotion differentiation",
       linetype = "")

# plot_df <- df %>% 
#   select(Participant, 
#          ed_score, 
#          ed_score_class, 
#          matches("ER_"),
#          -matches("_c$|_psc$|_lag|_sc$|_pc$"), 
#          neg_emo_comp) %>% 
#   pivot_longer(cols = c(-Participant, -neg_emo_comp, -ed_score, -ed_score_class), 
#                names_to = "strategy_type", 
#                values_to = "strategy_rating") %>% 
#   mutate(
#     ed_score = scale(ed_score, T, F),
#     strategy_type = str_replace_all(strategy_type, "ER_", ""),
#     strategy_type = reorder(strategy_type, strategy_rating, mean)) 
# plot_df %>% 
#   ggplot(aes(x = strategy_rating, y = neg_emo_comp, color = ed_score)) +
#   geom_jitter(alpha = .1, size = .2) +
#   geom_smooth(data=plot_df[!is.na(plot_df$ed_score_class),], 
#               aes(linetype = ed_score_class), 
#               method = "lm", 
#               se=F, color="black", size = .3) +
#   facet_wrap(~strategy_type) +
#   scale_x_continuous(breaks = seq(floor(min(plot_df$strategy_rating)), ceiling(max(plot_df$strategy_rating)), 2)) +
#   scale_colour_gradient2(low = "red", mid = "lightcyan", high = "blue")
#   

Confirmatory analysis

The analyses as specified in the analysis plan.

Side-by-side graph with original graph is ideal here

Starting the modeling …

build_model1 <- function(x){
  lmer(
    as.formula(glue::glue(
    "{x} ~ ed_score_sc + 
         perc_pass_sc + 
         person_neg_emo_mean_sc +
         (1 | Participant)")), df) %>%
    broom.mixed::tidy() %>% 
    mutate(strategy = x,
           strategy = str_replace(strategy, "ER_", ""),
           strategy = str_replace(strategy, "_sc", "")) %>% 
    select(strategy, everything())
}

result_1 <- map(paste0(er_strategy_cols, "_sc"), build_model1) %>%
  bind_rows

result_1 %>% sjPlot::tab_df(digits = 3)
strategy effect group term estimate std.error statistic df p.value
acceptance fixed NA (Intercept) 0.003 0.069 0.044 97.058 0.965
acceptance fixed NA ed_score_sc 0.099 0.071 1.403 97.057 0.164
acceptance fixed NA perc_pass_sc -0.113 0.089 -1.270 97.061 0.207
acceptance fixed NA person_neg_emo_mean_sc -0.428 0.088 -4.857 97.097 0.000
acceptance ran_pars Participant sd__(Intercept) 0.694 NA NA NA NA
acceptance ran_pars Residual sd__Observation 0.620 NA NA NA NA
rumination fixed NA (Intercept) 0.002 0.055 0.041 97.016 0.968
rumination fixed NA ed_score_sc -0.133 0.056 -2.396 97.013 0.018
rumination fixed NA perc_pass_sc 0.022 0.070 0.309 97.023 0.758
rumination fixed NA person_neg_emo_mean_sc 0.350 0.070 5.029 97.110 0.000
rumination ran_pars Participant sd__(Intercept) 0.542 NA NA NA NA
rumination ran_pars Residual sd__Observation 0.763 NA NA NA NA
reapp fixed NA (Intercept) -0.004 0.066 -0.061 97.140 0.951
reapp fixed NA ed_score_sc -0.071 0.067 -1.059 97.139 0.292
reapp fixed NA perc_pass_sc -0.048 0.085 -0.562 97.145 0.575
reapp fixed NA person_neg_emo_mean_sc 0.166 0.084 1.975 97.199 0.051
reapp ran_pars Participant sd__(Intercept) 0.658 NA NA NA NA
reapp ran_pars Residual sd__Observation 0.723 NA NA NA NA
supp fixed NA (Intercept) 0.002 0.068 0.031 97.044 0.975
supp fixed NA ed_score_sc -0.163 0.069 -2.344 97.043 0.021
supp fixed NA perc_pass_sc 0.047 0.087 0.541 97.048 0.590
supp fixed NA person_neg_emo_mean_sc 0.343 0.087 3.952 97.088 0.000
supp ran_pars Participant sd__(Intercept) 0.682 NA NA NA NA
supp ran_pars Residual sd__Observation 0.647 NA NA NA NA
soc_sharing fixed NA (Intercept) -0.002 0.045 -0.053 97.280 0.958
soc_sharing fixed NA ed_score_sc -0.094 0.045 -2.063 97.273 0.042
soc_sharing fixed NA perc_pass_sc 0.119 0.057 2.081 97.293 0.040
soc_sharing fixed NA person_neg_emo_mean_sc 0.181 0.057 3.198 97.466 0.002
soc_sharing ran_pars Participant sd__(Intercept) 0.435 NA NA NA NA
soc_sharing ran_pars Residual sd__Observation 0.887 NA NA NA NA
distraction fixed NA (Intercept) -0.007 0.080 -0.086 97.056 0.932
distraction fixed NA ed_score_sc 0.042 0.082 0.519 97.055 0.605
distraction fixed NA perc_pass_sc -0.069 0.103 -0.672 97.058 0.503
distraction fixed NA person_neg_emo_mean_sc 0.071 0.102 0.699 97.082 0.486
distraction ran_pars Participant sd__(Intercept) 0.804 NA NA NA NA
distraction ran_pars Residual sd__Observation 0.590 NA NA NA NA
build_model2 <- function(x){
  df %>% 
  lmer(
    as.formula(glue::glue(
    "neg_emo_comp_sc ~ 
    ed_score_sc*{x} +
    perc_pass_sc + 
    neg_emo_comp_lag_pc_sc +
    ({x} + neg_emo_comp_lag_pc_sc | Participant)")), .) %>%
    broom.mixed::tidy() %>% 
    mutate(strategy = x,
           strategy = str_replace(strategy, "ER_", ""),
           strategy = str_replace(strategy, "_pc_sc", "")) %>% 
    select(strategy, everything())
}

result_2 <- map(paste0(er_strategy_cols, "_pc_sc"), build_model2) %>%
  bind_rows
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.00907095
## (tol = 0.002, component 1)
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.00264685
## (tol = 0.002, component 1)
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.00335315
## (tol = 0.002, component 1)
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.00429511
## (tol = 0.002, component 1)
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.0029047
## (tol = 0.002, component 1)
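The max|grad| values in the warnings above are only slightly over lme4's default tolerance (0.002), so the fits are likely usable. A common robustness check (a sketch, not part of the original analysis) is to refit a flagged model with a different optimizer and a larger evaluation budget, then confirm the fixed effects are stable:

```r
library(lme4)

## Refit a flagged model with the bobyqa optimizer and a larger
## evaluation budget, then compare fixed effects against the
## original fit side by side.
refit_check <- function(fit) {
  refit <- update(fit, control = lmerControl(optimizer = "bobyqa",
                                             optCtrl = list(maxfun = 2e5)))
  cbind(default = fixef(fit), bobyqa = fixef(refit))
}
```

If the two columns agree to a few decimal places, the warning is benign; `lme4::allFit()` offers a more thorough comparison across all available optimizers.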
result_2 %>% sjPlot::tab_df(digits = 3)
strategy effect group term estimate std.error statistic df p.value
acceptance fixed NA (Intercept) -0.004 0.074 -0.054 97.561 0.957
acceptance fixed NA ed_score_sc -0.044 0.074 -0.597 98.154 0.552
acceptance fixed NA ER_acceptance_pc_sc -0.037 0.011 -3.345 79.598 0.001
acceptance fixed NA perc_pass_sc -0.578 0.074 -7.819 98.587 0.000
acceptance fixed NA neg_emo_comp_lag_pc_sc 0.159 0.010 15.743 96.188 0.000
acceptance fixed NA ed_score_sc:ER_acceptance_pc_sc 0.031 0.010 3.040 74.728 0.003
acceptance ran_pars Participant sd__(Intercept) 0.747 NA NA NA NA
acceptance ran_pars Participant sd__ER_acceptance_pc_sc 0.089 NA NA NA NA
acceptance ran_pars Participant sd__neg_emo_comp_lag_pc_sc 0.076 NA NA NA NA
acceptance ran_pars Participant cor__(Intercept).ER_acceptance_pc_sc -0.084 NA NA NA NA
acceptance ran_pars Participant cor__(Intercept).neg_emo_comp_lag_pc_sc -0.273 NA NA NA NA
acceptance ran_pars Participant cor__ER_acceptance_pc_sc.neg_emo_comp_lag_pc_sc 0.445 NA NA NA NA
acceptance ran_pars Residual sd__Observation 0.310 NA NA NA NA
rumination fixed NA (Intercept) -0.000 0.074 -0.001 97.894 1.000
rumination fixed NA ed_score_sc -0.022 0.075 -0.287 98.057 0.775
rumination fixed NA ER_rumination_pc_sc 0.083 0.012 7.164 72.970 0.000
rumination fixed NA perc_pass_sc -0.583 0.075 -7.792 98.386 0.000
rumination fixed NA neg_emo_comp_lag_pc_sc 0.142 0.010 14.057 93.577 0.000
rumination fixed NA ed_score_sc:ER_rumination_pc_sc -0.025 0.012 -2.083 70.843 0.041
rumination ran_pars Participant sd__(Intercept) 0.746 NA NA NA NA
rumination ran_pars Participant sd__ER_rumination_pc_sc 0.083 NA NA NA NA
rumination ran_pars Participant sd__neg_emo_comp_lag_pc_sc 0.074 NA NA NA NA
rumination ran_pars Participant cor__(Intercept).ER_rumination_pc_sc -0.085 NA NA NA NA
rumination ran_pars Participant cor__(Intercept).neg_emo_comp_lag_pc_sc -0.142 NA NA NA NA
rumination ran_pars Participant cor__ER_rumination_pc_sc.neg_emo_comp_lag_pc_sc -0.194 NA NA NA NA
rumination ran_pars Residual sd__Observation 0.306 NA NA NA NA
reapp fixed NA (Intercept) -0.003 0.074 -0.044 97.845 0.965
reapp fixed NA ed_score_sc -0.030 0.075 -0.397 98.183 0.692
reapp fixed NA ER_reapp_pc_sc 0.024 0.011 2.284 58.083 0.026
reapp fixed NA perc_pass_sc -0.578 0.074 -7.786 98.682 0.000
reapp fixed NA neg_emo_comp_lag_pc_sc 0.162 0.010 16.000 94.411 0.000
reapp fixed NA ed_score_sc:ER_reapp_pc_sc -0.017 0.011 -1.546 66.219 0.127
reapp ran_pars Participant sd__(Intercept) 0.747 NA NA NA NA
reapp ran_pars Participant sd__ER_reapp_pc_sc 0.072 NA NA NA NA
reapp ran_pars Participant sd__neg_emo_comp_lag_pc_sc 0.075 NA NA NA NA
reapp ran_pars Participant cor__(Intercept).ER_reapp_pc_sc -0.183 NA NA NA NA
reapp ran_pars Participant cor__(Intercept).neg_emo_comp_lag_pc_sc -0.176 NA NA NA NA
reapp ran_pars Participant cor__ER_reapp_pc_sc.neg_emo_comp_lag_pc_sc -0.150 NA NA NA NA
reapp ran_pars Residual sd__Observation 0.316 NA NA NA NA
supp fixed NA (Intercept) -0.003 0.074 -0.038 97.742 0.970
supp fixed NA ed_score_sc -0.034 0.075 -0.451 98.152 0.653
supp fixed NA ER_supp_pc_sc 0.043 0.010 4.156 52.177 0.000
supp fixed NA perc_pass_sc -0.583 0.074 -7.848 98.368 0.000
supp fixed NA neg_emo_comp_lag_pc_sc 0.160 0.010 16.078 93.618 0.000
supp fixed NA ed_score_sc:ER_supp_pc_sc -0.017 0.010 -1.719 48.739 0.092
supp ran_pars Participant sd__(Intercept) 0.747 NA NA NA NA
supp ran_pars Participant sd__ER_supp_pc_sc 0.064 NA NA NA NA
supp ran_pars Participant sd__neg_emo_comp_lag_pc_sc 0.074 NA NA NA NA
supp ran_pars Participant cor__(Intercept).ER_supp_pc_sc -0.167 NA NA NA NA
supp ran_pars Participant cor__(Intercept).neg_emo_comp_lag_pc_sc -0.210 NA NA NA NA
supp ran_pars Participant cor__ER_supp_pc_sc.neg_emo_comp_lag_pc_sc 0.153 NA NA NA NA
supp ran_pars Residual sd__Observation 0.314 NA NA NA NA
soc_sharing fixed NA (Intercept) -0.002 0.074 -0.029 97.715 0.977
soc_sharing fixed NA ed_score_sc -0.032 0.075 -0.428 98.069 0.670
soc_sharing fixed NA ER_soc_sharing_pc_sc 0.037 0.008 4.481 63.301 0.000
soc_sharing fixed NA perc_pass_sc -0.587 0.074 -7.884 98.330 0.000
soc_sharing fixed NA neg_emo_comp_lag_pc_sc 0.156 0.010 15.782 93.756 0.000
soc_sharing fixed NA ed_score_sc:ER_soc_sharing_pc_sc -0.030 0.008 -3.630 62.748 0.001
soc_sharing ran_pars Participant sd__(Intercept) 0.747 NA NA NA NA
soc_sharing ran_pars Participant sd__ER_soc_sharing_pc_sc 0.061 NA NA NA NA
soc_sharing ran_pars Participant sd__neg_emo_comp_lag_pc_sc 0.072 NA NA NA NA
soc_sharing ran_pars Participant cor__(Intercept).ER_soc_sharing_pc_sc -0.085 NA NA NA NA
soc_sharing ran_pars Participant cor__(Intercept).neg_emo_comp_lag_pc_sc -0.208 NA NA NA NA
soc_sharing ran_pars Participant cor__ER_soc_sharing_pc_sc.neg_emo_comp_lag_pc_sc -0.030 NA NA NA NA
soc_sharing ran_pars Residual sd__Observation 0.315 NA NA NA NA
distraction fixed NA (Intercept) -0.003 0.074 -0.042 97.767 0.967
distraction fixed NA ed_score_sc -0.035 0.075 -0.470 98.100 0.639
distraction fixed NA ER_distraction_pc_sc 0.025 0.010 2.449 53.897 0.018
distraction fixed NA perc_pass_sc -0.575 0.075 -7.698 98.323 0.000
distraction fixed NA neg_emo_comp_lag_pc_sc 0.162 0.010 15.802 93.179 0.000
distraction fixed NA ed_score_sc:ER_distraction_pc_sc -0.026 0.010 -2.621 56.359 0.011
distraction ran_pars Participant sd__(Intercept) 0.747 NA NA NA NA
distraction ran_pars Participant sd__ER_distraction_pc_sc 0.069 NA NA NA NA
distraction ran_pars Participant sd__neg_emo_comp_lag_pc_sc 0.077 NA NA NA NA
distraction ran_pars Participant cor__(Intercept).ER_distraction_pc_sc 0.039 NA NA NA NA
distraction ran_pars Participant cor__(Intercept).neg_emo_comp_lag_pc_sc -0.196 NA NA NA NA
distraction ran_pars Participant cor__ER_distraction_pc_sc.neg_emo_comp_lag_pc_sc 0.093 NA NA NA NA
distraction ran_pars Residual sd__Observation 0.316 NA NA NA NA

Exploratory analyses

Any follow-up analyses desired (not required).

# 
# df_scaled <- df %>%
#   mutate_at(vars(matches(".*ER.*|.*emotion.*")), scale)
# 
# # model <- '
# #     level: 1
# #         neg_emo =~ lag_emotion_angry + lag_emotion_disapp + lag_emotion_ashamed + lag_emotion_sad + lag_emotion_anxious
# #         neg_emo ~ ER_distraction
# #     level: 2
# #         neg_emo =~ lag_emotion_angry + lag_emotion_disapp + lag_emotion_ashamed + lag_emotion_sad + lag_emotion_anxious
# # '
# # fit <- lavaan::sem(model, data = df_scaled, cluster = "Participant", fixed.x = FALSE)
# # summary(fit)
# 
# df_mod <- df_scaled %>% 
#   filter(exam_beepnum < 5) %>% 
#   select(Participant, exam_beepnum, matches("^neg_emo|^ER_reap"), perc_pass) %>% 
#   mutate(t = paste0("t", exam_beepnum)) %>% 
#   select(-exam_beepnum) %>% 
#   rename("reapp" = "ER_reapp",
#          "neg_emo" = "neg_emo_mean") %>% 
#   pivot_wider(id_cols = c("Participant", "perc_pass"),
#               names_from = "t", values_from = c("reapp", "neg_emo"))
# 
# var_groups <- list(
#   x=c("reapp_t0", "reapp_t1", "reapp_t2", "reapp_t3", "reapp_t4"),
#   y=c("neg_emo_t0", "neg_emo_t1", "neg_emo_t2", "neg_emo_t3", "neg_emo_t4"))
# model_text <- riclpmr::riclpm_text(var_groups)
# 
# mod_text <- "
# ri_reapp =~ 1*reapp_t0 + 1*reapp_t1 + 1*reapp_t2 + 1*reapp_t3 + 1*reapp_t4
# ri_neg =~ 1*neg_emo_t0 + 1*neg_emo_t1 + 1*neg_emo_t2 + 1*neg_emo_t3 + 1*neg_emo_t4
# 
# ri_reapp ~~ ri_reapp
# ri_neg ~~ ri_neg
# ri_reapp ~~ ri_neg
# 
# reapp_t0 ~ reapp_t0_mu*1
# reapp_t1 ~ reapp_t1_mu*1
# reapp_t2 ~ reapp_t2_mu*1
# reapp_t3 ~ reapp_t3_mu*1
# reapp_t4 ~ reapp_t4_mu*1
# 
# neg_emo_t0 ~ neg_emo_t0_mu*1
# neg_emo_t1 ~ neg_emo_t1_mu*1
# neg_emo_t2 ~ neg_emo_t2_mu*1
# neg_emo_t3 ~ neg_emo_t3_mu*1
# neg_emo_t4 ~ neg_emo_t4_mu*1
# 
# lat_reapp1 =~ 1*reapp_t0
# lat_reapp2 =~ 1*reapp_t1
# lat_reapp3 =~ 1*reapp_t2
# lat_reapp4 =~ 1*reapp_t3
# lat_reapp5 =~ 1*reapp_t4
# 
# lat_neg1 =~ 1*neg_emo_t0
# lat_neg2 =~ 1*neg_emo_t1
# lat_neg3 =~ 1*neg_emo_t2
# lat_neg4 =~ 1*neg_emo_t3
# lat_neg5 =~ 1*neg_emo_t4
# 
# lat_reapp1 ~~ lat_neg1
# lat_reapp2 ~~ r_reappneg*lat_neg2
# lat_reapp3 ~~ r_reappneg*lat_neg3
# lat_reapp4 ~~ r_reappneg*lat_neg4
# lat_reapp5 ~~ r_reappneg*lat_neg5
# 
# lat_reapp2 ~ reapp_reapp*lat_reapp1 + reapp_neg*lat_neg1
# lat_reapp3 ~ reapp_reapp*lat_reapp2 + reapp_neg*lat_neg2
# lat_reapp4 ~ reapp_reapp*lat_reapp3 + reapp_neg*lat_neg3
# lat_reapp5 ~ reapp_reapp*lat_reapp4 + reapp_neg*lat_neg4
# lat_neg2 ~ neg_reapp*lat_reapp1 + reapp_neg*lat_neg1
# lat_neg3 ~ neg_reapp*lat_reapp2 + reapp_neg*lat_neg2
# lat_neg4 ~ neg_reapp*lat_reapp3 + reapp_neg*lat_neg3
# lat_neg5 ~ neg_reapp*lat_reapp4 + reapp_neg*lat_neg4
# 
# lat_reapp1 ~~ lat_reapp1
# lat_reapp2 ~~ e_reapp*lat_reapp2
# lat_reapp3 ~~ e_reapp*lat_reapp3
# lat_reapp4 ~~ e_reapp*lat_reapp4
# lat_reapp5 ~~ e_reapp*lat_reapp5
# 
# lat_neg1 ~~ lat_neg1
# lat_neg2 ~~ e_neg*lat_neg2
# lat_neg3 ~~ e_neg*lat_neg3
# lat_neg4 ~~ e_neg*lat_neg4
# lat_neg5 ~~ e_neg*lat_neg5"
# 
# model_text <- "
# ri_x =~ 1*reapp_t0 + 1*reapp_t1 + 1*reapp_t2 + 1*reapp_t3 + 1*reapp_t4
# ri_y =~ 1*neg_emo_t0 + 1*neg_emo_t1 + 1*neg_emo_t2 + 1*neg_emo_t3 + 1*neg_emo_t4
# ri_x ~~ ri_x
# ri_y ~~ ri_y
# ri_x ~~ ri_y
# reapp_t0 ~ reapp_t0_mu*1
# reapp_t1 ~ reapp_t1_mu*1
# reapp_t2 ~ reapp_t2_mu*1
# reapp_t3 ~ reapp_t3_mu*1
# reapp_t4 ~ reapp_t4_mu*1
# neg_emo_t0 ~ neg_emo_t0_mu*1
# neg_emo_t1 ~ neg_emo_t1_mu*1
# neg_emo_t2 ~ neg_emo_t2_mu*1
# neg_emo_t3 ~ neg_emo_t3_mu*1
# neg_emo_t4 ~ neg_emo_t4_mu*1
# lat_x1 =~ 1*reapp_t0
# lat_x2 =~ 1*reapp_t1
# lat_x3 =~ 1*reapp_t2
# lat_x4 =~ 1*reapp_t3
# lat_x5 =~ 1*reapp_t4
# lat_y1 =~ 1*neg_emo_t0
# lat_y2 =~ 1*neg_emo_t1
# lat_y3 =~ 1*neg_emo_t2
# lat_y4 =~ 1*neg_emo_t3
# lat_y5 =~ 1*neg_emo_t4
# lat_x1 ~~ lat_y1
# lat_x2 ~~ r_xy*lat_y2
# lat_x3 ~~ r_xy*lat_y3
# lat_x4 ~~ r_xy*lat_y4
# lat_x5 ~~ r_xy*lat_y5
# lat_x2 ~ x_x*lat_x1 + x_y*lat_y1
# lat_x3 ~ x_x*lat_x2 + x_y*lat_y2
# lat_x4 ~ x_x*lat_x3 + x_y*lat_y3
# lat_x5 ~ x_x*lat_x4 + x_y*lat_y4
# lat_y2 ~ y_x*lat_x1 + y_y*lat_y1
# lat_y3 ~ y_x*lat_x2 + y_y*lat_y2
# lat_y4 ~ y_x*lat_x3 + y_y*lat_y3
# lat_y5 ~ y_x*lat_x4 + y_y*lat_y4
# lat_x1 ~~ lat_x1
# lat_x2 ~~ e_x*lat_x2
# lat_x3 ~~ e_x*lat_x3
# lat_x4 ~~ e_x*lat_x4
# lat_x5 ~~ e_x*lat_x5
# lat_y1 ~~ lat_y1
# lat_y2 ~~ e_y*lat_y2
# lat_y3 ~~ e_y*lat_y3
# lat_y4 ~~ e_y*lat_y4
# lat_y5 ~~ e_y*lat_y5"
# 
# riclpmModel <- 
# '
# #Note, the data contain x1-3 and y1-3
# #Latent mean Structure with intercepts
# 
# kappa =~ 1*x1 + 1*x2 + 1*x3
# omega =~ 1*y1 + 1*y2 + 1*y3
# 
# x1 ~ mu1*1 #intercepts
# x2 ~ mu2*1
# x3 ~ mu3*1
# y1 ~ pi1*1
# y2 ~ pi2*1
# y3 ~ pi3*1
# 
# kappa ~~ kappa #variance
# omega ~~ omega #variance
# kappa ~~ omega #covariance
# 
# #latent vars for AR and cross-lagged effects
# p1 =~ 1*x1 #each factor loading set to 1
# p2 =~ 1*x2
# p3 =~ 1*x3
# q1 =~ 1*y1
# q2 =~ 1*y2
# q3 =~ 1*y3
# 
# #Later, we may constrain autoregression and cross-lagged
# #effects to be the same across both lags.
# p3 ~ alpha3*p2 + beta3*q2
# p2 ~ alpha2*p1 + beta2*q1
# 
# q3 ~ delta3*q2 + gamma3*p2
# q2 ~ delta2*q1 + gamma2*p1
# 
# p1 ~~ p1 #variance
# p2 ~~ u2*p2
# p3 ~~ u3*p3
# q1 ~~ q1 #variance
# q2 ~~ v2*q2
# q3 ~~ v3*q3
# 
# p1 ~~ q1 #p1 and q1 covariance
# p2 ~~ q2 #p2 and q2 covariance
# p3 ~~ q3 #p3 and q3 covariance'
# 
# fit <- lavaan(mod_text, data = df_mod,
#               missing = "ML", # FIML for missing data
#               int.ov.free = FALSE,
#               int.lv.free = FALSE,
#               auto.fix.first = FALSE,
#               auto.fix.single = FALSE,
#               auto.cov.lv.x = FALSE,
#               auto.cov.y = FALSE,
#               auto.var = FALSE)
# summary(fit, standardized = T)

Discussion

Summary of Replication Attempt

Open the discussion section with a paragraph summarizing the primary result from the confirmatory analysis and the assessment of whether it replicated, partially replicated, or failed to replicate the original result.

Commentary

Add open-ended commentary (if any) reflecting (a) insights from follow-up exploratory analyses, (b) assessment of the meaning of the replication (or not), e.g., for a failure to replicate, whether the differences between the original and present study are ones that definitely, plausibly, or are unlikely to have moderated the result, and (c) discussion of any objections or challenges raised by the current and original authors about the replication attempt. None of these need be long.

Analysis requirements

sessionInfo()
## R version 3.5.3 (2019-03-11)
## Platform: x86_64-apple-darwin15.6.0 (64-bit)
## Running under: macOS Mojave 10.14.6
## 
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] lavaan_0.6-5       riclpmr_0.1.0.9000 lmerTest_3.1-0    
##  [4] lme4_1.1-21        Matrix_1.2-15      glue_1.3.1        
##  [7] broom.mixed_0.2.4  here_0.1           psych_1.8.12      
## [10] haven_2.1.1        forcats_0.4.0      stringr_1.4.0     
## [13] dplyr_0.8.3        purrr_0.3.3        readr_1.3.1       
## [16] tidyr_1.0.0        tibble_2.1.3       ggplot2_3.2.1     
## [19] tidyverse_1.2.1   
## 
## loaded via a namespace (and not attached):
##  [1] minqa_1.2.4         colorspace_1.4-1    sjlabelled_1.1.1   
##  [4] rprojroot_1.3-2     estimability_1.3    htmlTable_1.13.1   
##  [7] parameters_0.2.0    base64enc_0.1-3     rstudioapi_0.10    
## [10] ggrepel_0.8.1       mvtnorm_1.0-10      lubridate_1.7.4    
## [13] xml2_1.2.2          splines_3.5.3       mnormt_1.5-5       
## [16] knitr_1.25          sjmisc_2.8.2        zeallot_0.1.0      
## [19] Formula_1.2-3       jsonlite_1.6        nloptr_1.2.1       
## [22] ggeffects_0.13.0    broom_0.5.2         cluster_2.0.7-1    
## [25] compiler_3.5.3      httr_1.4.1          emmeans_1.4.2      
## [28] sjstats_0.17.7      backports_1.1.5     assertthat_0.2.1   
## [31] lazyeval_0.2.2      cli_1.1.0           acepack_1.4.1      
## [34] htmltools_0.4.0     tools_3.5.3         coda_0.19-2        
## [37] gtable_0.3.0        reshape2_1.4.3      Rcpp_1.0.3         
## [40] cellranger_1.1.0    vctrs_0.2.0         sjPlot_2.7.2       
## [43] nlme_3.1-137        insight_0.7.0       xfun_0.10          
## [46] irr_0.84.1          rvest_0.3.4         lpSolve_5.6.13.3   
## [49] lifecycle_0.1.0     MASS_7.3-51.1       scales_1.0.0       
## [52] hms_0.5.2           parallel_3.5.3      TMB_1.7.15         
## [55] RColorBrewer_1.1-2  yaml_2.2.0          gridExtra_2.3      
## [58] rpart_4.1-13        latticeExtra_0.6-28 stringi_1.4.3      
## [61] bayestestR_0.4.0    checkmate_1.9.1     boot_1.3-20        
## [64] rlang_0.4.1         pkgconfig_2.0.3     evaluate_0.14      
## [67] lattice_0.20-38     htmlwidgets_1.5.1   labeling_0.3       
## [70] tidyselect_0.2.5    plyr_1.8.4          magrittr_1.5       
## [73] R6_2.4.0            generics_0.0.2      Hmisc_4.2-0        
## [76] pillar_1.4.2        foreign_0.8-71      withr_2.1.2        
## [79] mgcv_1.8-27         survival_2.43-3     nnet_7.3-12        
## [82] performance_0.4.0   modelr_0.1.5        crayon_1.3.4       
## [85] rmarkdown_1.16      grid_3.5.3          readxl_1.3.1       
## [88] data.table_1.12.6   pbivnorm_0.6.0      digest_0.6.22      
## [91] xtable_1.8-4        numDeriv_2016.8-1.1 stats4_3.5.3       
## [94] munsell_0.5.0